SURF-ing a Model of Spatiotemporal Saliency
Author
Abstract
Zhai and Shah (2006) proposed a model of spatiotemporal saliency that combines temporal and spatial attention models. The temporal model uses Lowe's SIFT (2004) to compute feature points and the correspondences between them in successive frames. Bay, Tuytelaars, and Van Gool (2006) introduced SURF as an alternative feature detector and descriptor, and show that it is both faster than and superior to SIFT. This investigation replicated the model of Zhai and Shah, evaluating its performance with SURF used in place of SIFT. The SIFT and SURF variants of the temporal model were tested on a variety of frame sets. In addition to qualitative comparisons, the variants were compared quantitatively on the speed of computing feature correspondences and the resulting speed of homography computation. The robustness of each variant to increased temporal spacing between frames was also tested at several step intervals. As expected, the SURF-based model was much faster; otherwise the two models were generally indistinguishable, and both showed little change in performance as the time between successive frames increased.
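The comparison described above can be sketched as follows. This is not the authors' implementation, only a minimal OpenCV-based illustration of the measured steps: detect SIFT or SURF keypoints in two successive frames, match descriptors with Lowe's ratio test, and time both the correspondence step and the RANSAC homography fit. File names, the Hessian threshold, and the 0.7 ratio are assumptions; SURF requires an opencv-contrib build with the nonfree modules enabled.

```python
# Hypothetical sketch: timing SIFT vs. SURF correspondence and homography
# estimation between two successive frames (not the authors' original code).
import time
import cv2
import numpy as np

def match_and_homography(frame_a, frame_b, detector):
    """Detect keypoints, match with Lowe's ratio test, and fit a homography."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    t0 = time.perf_counter()
    kp_a, des_a = detector.detectAndCompute(gray_a, None)
    kp_b, des_b = detector.detectAndCompute(gray_b, None)

    # Brute-force L2 matching suits both SIFT and SURF float descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    t_match = time.perf_counter() - t0

    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # findHomography needs at least 4 correspondences.
    t1 = time.perf_counter()
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    t_homog = time.perf_counter() - t1
    return H, len(good), t_match, t_homog

if __name__ == "__main__":
    frame_a = cv2.imread("frame_000.png")   # hypothetical frame files
    frame_b = cv2.imread("frame_001.png")

    sift = cv2.SIFT_create()
    # SURF is non-free: needs opencv-contrib built with OPENCV_ENABLE_NONFREE.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    for name, det in [("SIFT", sift), ("SURF", surf)]:
        H, n, tm, th = match_and_homography(frame_a, frame_b, det)
        print(f"{name}: {n} matches, matching {tm:.3f}s, homography {th:.3f}s")
```

Increasing the index gap between the two frames (frame 0 vs. frame 2, 3, ...) would reproduce the temporal-spacing robustness test in the same loop.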
Similar articles
A Comparison of Feature Detectors with Passive and Task-Based Visual Saliency
This paper investigates the coincidence between six interest point detection methods (SIFT, MSER, Harris-Laplace, SURF, FAST & Kadir-Brady Saliency) with two robust “bottom-up” models of visual saliency (Itti and Harel) as well as “task” salient surfaces derived from observer eye-tracking data. Comprehensive statistics for all detectors vs. saliency models are presented in the presence and abse...
A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these m...
Salient Region Detection in Video Using Spatiotemporal Visual Attention Model
Salient region detection is very useful in video analysis. A salient region detection method based on a spatiotemporal visual attention model is proposed in this paper. A visual attention mechanism is used to generate a saliency map of the image sequence. The spatial saliency map is computed from predefined features including intensity, color and orientation. Temporal visual s...
Graph-based Visual Saliency Model using Background Color
Visual saliency is a cognitive psychology concept that makes some stimuli of a scene stand out relative to their neighbors and attract our attention. Computing visual saliency is a topic of recent interest. Here, we propose a graph-based method for saliency detection, which contains three stages: pre-processing, initial saliency detection and final saliency detection. The initial saliency map i...
A Bottom-Up Spatiotemporal Visual Attention Model for Video Analysis
A video analysis framework based on spatiotemporal saliency calculation is presented. We propose a novel scheme for generating saliency in video sequences by taking into account both the spatial extent and dynamic evolution of regions. Towards this goal we extend a common image-oriented computational model of saliency-based visual attention to handle spatiotemporal analysis of video in a volume...